Stopping AI development is not in the U.S. interest; rather, the U.S. should lead the way in both development and regulation.
sla Sam Altman says AI development should be licensed. I try to find detailed sources in English as much as possible; this time I read TIME and Forbes, but they still had different nuances. (continued)
https://pbs.twimg.com/card_img/1658606261154512896/OvOXqk1b?format=jpg&name=medium#.png
sla Sam made the following three proposals at this Congressional hearing:
・A licensing system for developing AI models "with a certain level of capability or above"
・Safety standards for such high-capability AI (e.g., prohibiting self-replication and self-exfiltration)
・Audits by a third-party expert body independent of both developers and the government
On the other hand, he stopped short of committing to guarantees of transparency for AI models.
sla The hearing also included Professor Gary Marcus of NYU and Christina Montgomery, IBM's Chief Privacy & Trust Officer. They argued that transparency should also be regulated. Sam also agreed that AI companies, including OpenAI, should give content creators a way to opt out of having their work used as training data.
sla The three parties were unanimous that stopping AI development is not in the U.S. interest, and that the U.S. should instead lead the way in both development and regulation. There was no heated disagreement; one lawmaker remarked, "I can't think of a precedent for the private sector recognizing the dangers of a technology and agreeing to regulation." On the other hand, there were gaps in mutual understanding of both the technology and the regulations.
sla With a presidential election coming up next year, lawmakers were particularly concerned about negative effects on the election (including their own past failure to prevent the abuse of social media). Both sides agreed that this hearing is only a first step and that discussion of how AI should be regulated will continue.
sla This isn't just OpenAI talking its own book, because all parties agree on the need for regulation. They also say that regulation which benefits only big companies is not good enough: risky AI should be restricted regardless of who builds it, while open source and free competition that pose no such risk should not be restricted.
Forbes.
https://pbs.twimg.com/card_img/1658691302912061440/lo0M2aat?format=jpg&name=medium#.png
---
This page is auto-translated from /nishio/AI開発を止めることは米国の利益にならない、むしろ開発と規制の両軸で先導すべき using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.